10 research outputs found

    Quantization Error Minimization by Reducing Median Difference at Quantization Interval Class

    In this paper, a new technique for defining the size of the quantization interval is proposed. In general, a high quantization error occurs when a large interval is used for a class with large difference values, whereas a low quantization error occurs when a small interval is used for that class. However, too many class intervals lead to higher system complexity. Thus, this research focuses on designing a quantization algorithm that provides as efficient an interval as possible to reduce the quantization error. The novelty of the proposed algorithm is that it exploits the high occurrence of zero coefficients by re-allocating the non-zero coefficients into a group for quantization. The experimental results show that the new algorithm produces a highly compressed image without compromising image quality.
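    The interval-size trade-off described above can be illustrated with a toy uniform quantizer. This is a minimal sketch assuming midpoint reconstruction of each interval; the interval sizes and the synthetic coefficients are illustrative, not the paper's actual algorithm or data.

    ```python
    import numpy as np

    def quantize(coeffs, q):
        """Uniform quantization with interval size q: map each coefficient
        to the midpoint of the interval it falls into."""
        idx = np.floor(coeffs / q)
        return (idx + 0.5) * q

    def quantization_error(coeffs, q):
        """Mean squared quantization error for interval size q."""
        return float(np.mean((coeffs - quantize(coeffs, q)) ** 2))

    rng = np.random.default_rng(0)
    coeffs = rng.normal(0.0, 10.0, size=10_000)  # wavelet-like coefficients

    # A smaller interval yields a lower quantization error, at the cost of
    # more intervals (i.e. higher complexity), as the abstract notes.
    err_small = quantization_error(coeffs, q=0.5)
    err_large = quantization_error(coeffs, q=4.0)
    print(err_small < err_large)  # True
    ```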

    New Wavelet Domain Wiener Filter Based Denoising for Poisson Noise Removal in Low-Light Condition Digital Image (OTSU WIE-WATH)

    Digital imaging was developed as early as the 1960s, largely to avoid the operational weaknesses of film cameras in scientific and military missions. As digital technology later became cheaper, digital images became commonplace and can now be captured simply with the camera embedded in a smartphone. Nevertheless, owing to the limitations of low-cost camera technologies, digital images are easily corrupted by various types of noise, such as salt-and-pepper noise, Gaussian noise and Poisson noise. For digital images captured in photon-limited low-light conditions, the effect of image noise, especially Poisson noise, is more pronounced and degrades image quality. Thus, this study aims to develop a new denoising technique for Poisson noise removal in low-light digital images. The proposed method, referred to as the OTSU WIE-WATH Filter, utilizes the Otsu threshold, the Wiener filter and wavelet thresholding, and is designed for low and medium Poisson noise removal. Its performance is compared with that of existing denoising techniques using two evaluation approaches: an objective method, covering Peak Signal-to-Noise Ratio (PSNR) and Mean Squared Error (MSE), and a subjective method based on visual inspection. The results show that the proposed OTSU WIE-WATH Filter outperforms the compared denoising techniques at low and medium Poisson noise levels while preserving the edges and fine details of noisy images.
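    The objective metrics mentioned, MSE and PSNR, follow standard definitions. A minimal numpy sketch is below; the 8-bit peak value of 255 and the flat test image are the usual conventions and illustrative choices, not values taken from the paper.

    ```python
    import numpy as np

    def mse(reference, distorted):
        """Mean squared error between two images of equal shape."""
        ref = reference.astype(np.float64)
        dist = distorted.astype(np.float64)
        return float(np.mean((ref - dist) ** 2))

    def psnr(reference, distorted, peak=255.0):
        """Peak signal-to-noise ratio in dB; higher means less distortion."""
        err = mse(reference, distorted)
        if err == 0:
            return float("inf")  # identical images
        return float(10.0 * np.log10(peak ** 2 / err))

    # Illustrative photon-limited model: Poisson noise on a flat image,
    # where each pixel value acts as the photon-count rate.
    rng = np.random.default_rng(0)
    clean = np.full((64, 64), 100.0)
    noisy = rng.poisson(clean).astype(np.float64)
    print(round(psnr(clean, noisy), 2))
    ```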

    Consolidating Literature for Images Compression and Its Techniques

    With the proliferation of readily available image content, image compression has become a topic of considerable importance. As demand for digital imaging increases rapidly, storage capacity must also be considered. Image compression refers to reducing the size of an image to minimize storage requirements without harming image quality; an appropriate compression technique is therefore needed to save capacity while not losing valuable information. This paper consolidates literature focused on image compression, thresholding algorithms and quantization algorithms, and then presents related research in these areas.

    The Effect on Compressed Image Quality using Standard Deviation-Based Thresholding Algorithm

    In recent decades, digital images have become increasingly important. With many modern applications using image graphics extensively, they tend to burden both storage and transmission. Despite technological advances in storage and transmission, the demands placed on storage and bandwidth capacities still exceed their availability. Compression is one solution to this problem, but eliminating some of the data degrades image quality. Therefore, the Standard Deviation-Based Thresholding Algorithm is proposed to estimate an accurate threshold value for better compressed-image quality. The threshold value is obtained by examining the dispersion of the wavelet coefficients in each wavelet subband using the standard deviation. The resulting compressed images show better quality, with PSNR values above 40 dB.
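    The idea of deriving a per-subband threshold from coefficient dispersion can be sketched as follows. This assumes a simple threshold of the form k·σ with hard thresholding, and uses synthetic subbands in place of a real wavelet decomposition; the paper's exact threshold formula may differ.

    ```python
    import numpy as np

    def std_threshold(subband, k=1.0):
        """Estimate a threshold from the dispersion (standard deviation)
        of the wavelet coefficients in one subband."""
        return k * float(np.std(subband))

    def hard_threshold(subband, t):
        """Zero out coefficients whose magnitude falls below t."""
        out = subband.copy()
        out[np.abs(out) < t] = 0.0
        return out

    rng = np.random.default_rng(0)
    # Synthetic detail subbands standing in for a wavelet decomposition.
    subbands = {name: rng.normal(0.0, sigma, (32, 32))
                for name, sigma in [("HL", 2.0), ("LH", 2.0), ("HH", 1.0)]}

    for name, band in subbands.items():
        t = std_threshold(band, k=1.0)
        kept = int(np.count_nonzero(hard_threshold(band, t)))
        print(name, round(t, 3), kept)
    ```

    Zeroing small coefficients is what makes the subsequent entropy coding compress well; the threshold controls the quality/size trade-off.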

    Development of body stress analyzer based on physiological signal

    eHealth systems apply modern technologies to healthcare organizations, allowing patients to access and monitor their health records, among other services, using the internet as a communication platform. Inspired by eHealth, this project builds a hardware device with a software application that acts as a monitoring system. For the hardware, the user only needs to grasp the two terminals on the device, which contain all the sensors required for the testing procedure. The collected data are then sent to the application via the Internet of Things for storage. This helps physical-education and health teachers observe students and record the data obtained from the application; it also helps avoid incidents that could occur if students carried out activities while their bodies were in an abnormal condition. The scope of this project focuses on monitoring students' body condition before they start outdoor activities in the field. The parameters are sensed with a pulse sensor, an LM35 temperature sensor and a Galvanic Skin Response sensor, with an Arduino as the microcontroller processing the input and output signals.
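    As a small illustration of the sensing step, the LM35 outputs 10 mV per °C, so a 10-bit Arduino ADC reading (0–1023 over a 5 V reference) converts to temperature as below. This is the generic LM35 conversion sketched in Python for clarity, not the project's actual firmware.

    ```python
    def lm35_celsius(adc_reading, vref=5.0, adc_max=1023):
        """Convert a 10-bit ADC reading of the LM35 output to degrees Celsius.

        The LM35 produces 10 mV per degree Celsius, so:
            voltage = adc_reading * vref / adc_max
            temperature = voltage * 100
        """
        voltage = adc_reading * vref / adc_max
        return voltage * 100.0

    # Example: a reading of 62 corresponds to roughly 30.3 degrees Celsius.
    print(round(lm35_celsius(62), 1))  # 30.3
    ```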

    Feature Encoding and Selection for Iris Recognition Based on Variable Length Black Hole Optimization

    Iris recognition is one of the most reliable biometric human identification methods. It exploits the distinctive pattern of the iris area. Typically, several steps are performed for iris recognition, namely pre-processing, segmentation, normalization, extraction, coding and classification. In this article, we present a novel algorithm for iris recognition that includes, in addition to iris feature extraction and coding, a feature-selection step. Furthermore, it selects a variable-length set of features by adapting our recent variable length black hole optimization (VLBHO) algorithm; it is the first variable-length feature selection for iris recognition. Our proposed algorithm enables segment-based decomposition of features according to their relevance, which makes the optimization more efficient in terms of both memory and computation, and more promising in terms of convergence. For classification, the article uses the well-known support vector machine (SVM) and the logistic model. The proposed algorithm has been evaluated on two iris datasets, namely IITD and CASIA. The finding is that optimizing feature encoding and selection with VLBHO is superior to the benchmarks, with an improvement of 0.21%.
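    To make the variable-length selection idea concrete, the sketch below scores feature subsets of varying size with a simple nearest-centroid classifier on synthetic data. A random search stands in for the VLBHO optimizer, whose actual update rules are not described in this summary; every name and parameter here is illustrative.

    ```python
    import numpy as np

    def nearest_centroid_accuracy(X, y, features):
        """Accuracy of a nearest-centroid classifier restricted to `features`."""
        Xs = X[:, features]
        classes = np.unique(y)
        centroids = np.stack([Xs[y == c].mean(axis=0) for c in classes])
        d = np.linalg.norm(Xs[:, None, :] - centroids[None, :, :], axis=2)
        pred = classes[np.argmin(d, axis=1)]
        return float(np.mean(pred == y))

    def random_subset_search(X, y, n_iter=200, seed=0):
        """Random search over variable-length feature subsets (a stand-in for
        the VLBHO optimizer): each candidate picks its own subset size."""
        rng = np.random.default_rng(seed)
        n_features = X.shape[1]
        best_feats = list(range(n_features))
        best_acc = nearest_centroid_accuracy(X, y, best_feats)
        for _ in range(n_iter):
            size = int(rng.integers(1, n_features + 1))
            feats = sorted(rng.choice(n_features, size=size, replace=False).tolist())
            acc = nearest_centroid_accuracy(X, y, feats)
            # Prefer higher accuracy; break ties toward shorter feature vectors.
            if acc > best_acc or (acc == best_acc and len(feats) < len(best_feats)):
                best_feats, best_acc = feats, acc
        return best_feats, best_acc

    # Synthetic data: only the first two of eight features carry class signal.
    rng = np.random.default_rng(1)
    y = np.repeat([0, 1], 50)
    X = rng.normal(0.0, 1.0, (100, 8))
    X[:, 0] += 3.0 * y
    X[:, 1] -= 3.0 * y
    feats, acc = random_subset_search(X, y)
    print(feats, round(acc, 3))
    ```

    The tie-break toward shorter subsets mirrors the motivation for variable-length selection: equal accuracy with fewer features means less memory and computation.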

    Application of Data Mining Techniques for Medical Data Classification: A Review

    This paper investigates existing practices and prospects of medical data classification based on data mining techniques, highlighting the major advanced classification approaches used to enhance classification accuracy. Past research has provided extensive literature on medical data classification using data mining techniques, and this analysis finds that such techniques are very effective for the classification task. The paper comparatively analyses current advancements in the classification of medical data. The findings show that the existing classification of medical data can be improved further; nonetheless, more research is needed to identify and reduce the ambiguities of classification to gain better precision.

    Intrusion-detection system based on hybrid models: review paper

    Intrusion-detection systems (IDS) are currently among the most important security tools. A hybrid IDS model generally offers better results than detection based on a single algorithm; however, hybrid models based on conventional algorithms still face various problems. The objective of this study is to summarize the most important assumptions and limitations of existing hybrid intrusion-detection approaches and to analyze the limitations that the new machine learning algorithm (FLN) could address, in order to derive recommendations for IDS design.

    A comparative analysis among dual tree complex wavelet and other wavelet transforms based on image compression

    Recently, the demand for efficient image compression algorithms has peaked, driven by the need to store and transmit images over long-distance communication channels. Image applications are now highly prominent in multimedia production, medical imaging, law-enforcement forensics and the defense industries; effective image compression therefore offers the ability to record, store, transmit and analyze images for these applications very efficiently. This paper offers a comparative analysis between the Dual Tree Complex Wavelet Transform (DTCWT) and other wavelet transforms, such as Embedded Zerotree Wavelet (EZW), Spatial orientation Transform Wavelet (STW) and Lifting Wavelet Transform (LWT), for compressing grayscale images. The performances of these transforms are compared using objective measures: peak signal-to-noise ratio (PSNR), mean squared error (MSE), compression ratio (CR), bits per pixel (BPP) and computational time (CT). The experimental results show that DTCWT provides better performance in terms of PSNR and MSE, and better image reconstruction, than the other methods.
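    Two of the objective measures used in the comparison can be computed directly. This sketch assumes the standard definitions, CR = original bits / compressed bits and BPP = compressed bits / number of pixels; the image dimensions and compressed size are illustrative, not results from the paper.

    ```python
    def compression_ratio(original_bits, compressed_bits):
        """CR: how many times smaller the compressed stream is."""
        return original_bits / compressed_bits

    def bits_per_pixel(compressed_bits, n_pixels):
        """BPP: average number of bits spent per image pixel."""
        return compressed_bits / n_pixels

    # Illustrative: a 512x512, 8-bit grayscale image compressed to 32768 bytes.
    h, w, depth = 512, 512, 8
    original_bits = h * w * depth
    compressed_bits = 32768 * 8
    print(compression_ratio(original_bits, compressed_bits))  # 8.0
    print(bits_per_pixel(compressed_bits, h * w))             # 1.0
    ```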